Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition
Authors
Abstract
This paper proposes two multimodal fusion methods that combine brain and peripheral signals for emotion recognition. The input signals are the electroencephalogram (EEG) and facial expression. The stimuli are a subset of movie clips corresponding to four specific areas of the valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, the four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, the four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is based on two decision-level fusion methods that combine the EEG and facial expression detections using a sum rule or a product rule. Twenty healthy subjects attended two experiments. The results show that the accuracies of the two multimodal fusion detections are 81.25% and 82.75%, respectively, both higher than that of facial expression detection (74.38%) or EEG detection (66.88%) alone. Combining facial expression and EEG information for emotion recognition compensates for the limitations of each as a single information source.
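The decision-level fusion described above can be sketched in a few lines: each unimodal classifier outputs posterior probabilities over the four emotion classes, and the fused decision picks the class that maximizes either the sum or the product of the two posteriors. This is a minimal illustrative sketch; the function name, class ordering, and probability values are assumptions, not taken from the paper.

```python
# Minimal sketch of decision-level fusion with a sum rule or a product rule.
# Each classifier (facial expression, EEG) yields a posterior probability
# vector over the four emotion classes; the fused label is the argmax of
# the combined scores.

CLASSES = ["happiness", "neutral", "sadness", "fear"]

def fuse(p_face, p_eeg, rule="sum"):
    """Combine two posterior vectors with a sum or product rule."""
    if rule == "sum":
        scores = [f + e for f, e in zip(p_face, p_eeg)]
    elif rule == "product":
        scores = [f * e for f, e in zip(p_face, p_eeg)]
    else:
        raise ValueError("rule must be 'sum' or 'product'")
    return CLASSES[scores.index(max(scores))]

# Illustrative posteriors (hypothetical values): the face classifier leans
# toward 'happiness', the EEG classifier toward 'neutral'.
p_face = [0.50, 0.30, 0.10, 0.10]
p_eeg  = [0.35, 0.40, 0.15, 0.10]
print(fuse(p_face, p_eeg, rule="sum"))      # fused label under the sum rule
print(fuse(p_face, p_eeg, rule="product"))  # fused label under the product rule
```

In practice the posteriors would come from the neural network and SVM outputs (e.g. via softmax or probability calibration); the fusion step itself is classifier-agnostic.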
Similar articles
Evidence Theory-Based Multimodal Emotion Recognition
Automatic recognition of human affective states is still a largely unexplored and challenging topic. Even more issues arise when dealing with variable quality of the inputs or aiming for real-time, unconstrained, and person independent scenarios. In this paper, we explore audio-visual multimodal emotion recognition. We present SAMMI, a framework designed to extract real-time emotion appraisals ...
Educational Facial Emotion Recognition in Children With Autism Spectrum Disorder: A Clinical Trial Study
Objective: The inability to recognize facial emotions is one of the behavioral problems in autistic children. This study was designed to evaluate the effect of education on the promotion of face recognition. Methods: This single-blind clinical trial study was conducted on children with autism. The participants were allocated to the two groups by random sampling. Autistic children in the int...
Towards Formal Multimodal Analysis of Emotions for Affective Computing
Social robotics concerns robotic systems and human interaction. Social robots have applications in elderly care, health care, home care, customer service, and reception in industrial settings. Human-Robot Interaction (HRI) requires a better understanding of human emotion. There are few multimodal fusion systems that integrate a limited amount of facial expression, speech and gesture analysi...
Multimodal Emotion Recognition Integrating Affective Speech with Facial Expression
In recent years, emotion recognition has attracted extensive interest in signal processing, artificial intelligence and pattern recognition due to its potential applications to human-computer interaction (HCI). Most previously published works in the field of emotion recognition perform emotion recognition using either affective speech or facial expression. However, affective spe...
Combining Eye Movements and EEG to Enhance Emotion Recognition
In this paper, we adopt a multimodal emotion recognition framework that combines eye movements and electroencephalography (EEG) to enhance emotion recognition. The main contributions of this paper are twofold. a) We investigate sixteen eye movements related to emotions and identify the intrinsic patterns of these eye movements for three emotional states: positive, neutral and negative. b) We exa...
Journal:
Volume 2017, Issue
Pages -
Publication date: 2017